Rate-optimal refinement strategies for local approximation MCMC

Authors

Abstract

Many Bayesian inference problems involve target distributions whose density functions are computationally expensive to evaluate. Replacing the target density with a local approximation based on a small number of carefully chosen evaluations can significantly reduce the computational expense of Markov chain Monte Carlo (MCMC) sampling. Moreover, continual refinement of the local approximation can guarantee asymptotically exact sampling. We devise a new strategy for balancing the decay rate of the bias due to the approximation with that of the MCMC variance. We prove that the error of the resulting local approximation MCMC (LA-MCMC) algorithm decays at roughly the expected $$1/\sqrt{T}$$ rate, and we demonstrate this rate numerically. We also introduce an algorithmic parameter that guarantees convergence given very weak tail bounds, significantly strengthening previous convergence results. Finally, we apply LA-MCMC to a computationally intensive Bayesian inverse problem arising in groundwater hydrology.
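The idea described in the abstract can be illustrated with a toy sketch. This is not the authors' algorithm: the nearest-neighbor surrogate, the $$t^{-1/2}$$ refinement schedule, and the Gaussian stand-in for an expensive density are all illustrative choices made here. The sketch shows the two ingredients the abstract names: a cheap local approximation built from cached evaluations, and continual refinement that keeps adding true evaluations over time.

```python
import math
import random

random.seed(0)

EVALS = 0  # counts calls to the "expensive" density


def expensive_logpi(x):
    """Stand-in for an expensive posterior log-density (standard normal)."""
    global EVALS
    EVALS += 1
    return -0.5 * x * x


cache = []  # cached (x, logpi(x)) pairs


def surrogate_logpi(x, k=3):
    """Local approximation: inverse-distance-weighted average of the k
    nearest cached evaluations; evaluates the true density while the
    cache is too small to interpolate."""
    if len(cache) < k:
        val = expensive_logpi(x)
        cache.append((x, val))
        return val
    near = sorted(cache, key=lambda p: abs(p[0] - x))[:k]
    wts = [1.0 / (abs(px - x) + 1e-12) for px, _ in near]
    return sum(w * v for w, (_, v) in zip(wts, near)) / sum(wts)


def la_mcmc(T, step=1.0):
    """Random-walk Metropolis driven by the surrogate, with refinement."""
    x, chain = 0.0, []
    for t in range(1, T + 1):
        y = x + random.gauss(0.0, step)
        # Accept/reject using the cheap surrogate, not the true density.
        if math.log(random.random()) < surrogate_logpi(y) - surrogate_logpi(x):
            x = y
        # Refinement: with a probability decaying like t^(-1/2), pay for a
        # true evaluation at the current state and add it to the cache.
        if random.random() < t ** -0.5:
            cache.append((x, expensive_logpi(x)))
        chain.append(x)
    return chain


chain = la_mcmc(5000)
print(f"true evaluations: {EVALS} of 5000 MCMC steps")
print(f"sample mean ~ {sum(chain) / len(chain):.3f}")
```

The point of the sketch is the cost profile: the number of expensive evaluations grows like the sum of $$t^{-1/2}$$, i.e. on the order of $$\sqrt{T}$$, rather than linearly in the chain length.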


Similar articles

Partition of Unity Refinement for local approximation

In this article, we propose a Partition of Unity Refinement (PUR) method to improve the local approximations of elliptic boundary value problems in regions of interest. The PUR method only needs to refine the local meshes, and hanging nodes generate no difficulty. Mesh qualities such as uniformity or quasi-uniformity are kept. The advantages of the PUR include its effectiveness and relatively ...
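A minimal one-dimensional sketch of partition-of-unity blending (illustrative only; the mesh sizes, patch location, and smoothstep weight are choices made here, not the PUR method of the article): a coarse global piecewise-linear approximation is combined with a refined local one on a patch of interest, using weights that sum to one, so the refinement stays local to the patch.

```python
import math


def f(x):
    """Function being approximated (stand-in for a BVP solution)."""
    return math.sin(x)


def interp(nodes, x):
    """Piecewise-linear interpolant of f on the given mesh nodes."""
    for x0, x1 in zip(nodes, nodes[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * f(x0) + t * f(x1)
    return f(x)  # outside the mesh: fall back to the exact value


def pu_weight(x, a, b, d):
    """Partition-of-unity weight: 0 outside [a, b], 1 on [a+d, b-d],
    smoothstep ramps of width d in the overlap regions."""
    if x <= a or x >= b:
        return 0.0
    t = min((x - a) / d, (b - x) / d, 1.0)
    return t * t * (3 - 2 * t)


coarse = [i * math.pi / 4 for i in range(5)]    # global mesh, h = pi/4
a, b, d = 1.0, 1.0 + math.pi / 4, 0.2           # local patch and overlap width
fine = [a + i * (b - a) / 8 for i in range(9)]  # refined mesh on the patch


def blended(x):
    """PU blend: weights for the local and global pieces sum to one."""
    w = pu_weight(x, a, b, d)
    return w * interp(fine, x) + (1 - w) * interp(coarse, x)


# Compare errors on the patch core, where the local weight equals one.
xs = [a + d + i * (b - a - 2 * d) / 100 for i in range(101)]
err_coarse = max(abs(interp(coarse, x) - f(x)) for x in xs)
err_blend = max(abs(blended(x) - f(x)) for x in xs)
print(f"coarse error {err_coarse:.4f} -> blended error {err_blend:.5f}")
```

On the patch core the blended approximation inherits the fine-mesh accuracy, while outside the patch the coarse global approximation is untouched.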


Sampling Strategies for MCMC

There are many good methods for sampling Markov chains via streams of independent U[0, 1] random variables. Recently some non-random and some random but dependent driving sequences have been shown to result in consistent Markov chain sampling, sometimes with considerably improved accuracy. The key to consistent sampling is for the driving sequence to be completely uniformly distributed (CUD) o...
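The notion of a driving sequence can be made concrete with a sketch (illustrative only, not the constructions discussed in the article): a Metropolis sampler that draws all of its randomness from an explicit stream of U[0, 1] values. An i.i.d. stream recovers classical MCMC, and a different driving sequence (e.g. a CUD one) could be substituted without changing the sampler itself.

```python
import math
import random
from statistics import NormalDist

STD = NormalDist()  # standard normal, used to map uniforms to proposal steps


def metropolis(logpi, x0, steps, driver):
    """Random-walk Metropolis in which every random decision consumes
    values from `driver`, a stream of U(0,1) variates (two per step:
    one for the proposal, one for the accept/reject decision)."""
    x, chain = x0, []
    for _ in range(steps):
        y = x + STD.inv_cdf(next(driver))                 # proposal from u1
        if math.log(next(driver)) < logpi(y) - logpi(x):  # accept from u2
            x = y
        chain.append(x)
    return chain


def iid_uniforms(seed=1):
    """Classical i.i.d. driving stream; a CUD or other dependent
    sequence could be plugged in here instead."""
    rng = random.Random(seed)
    while True:
        u = rng.random()
        if u > 0.0:  # guard the open-interval endpoint
            yield u


# Sample a standard normal target with the i.i.d. driver.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, iid_uniforms())
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print(f"mean ~ {mean:.3f}, var ~ {var:.3f}")
```

Factoring the randomness into a separate stream is exactly what makes the "driving sequence" perspective useful: consistency becomes a property of the stream, not of the sampler.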


Basis refinement strategies for linear value function approximation in MDPs

We provide a theoretical framework for analyzing basis function construction for linear value function approximation in Markov Decision Processes (MDPs). We show that important existing methods, such as Krylov bases and Bellman-error-based methods, are special cases of the general framework we develop. We provide a general algorithmic framework for computing basis function refinements which “res...
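The Krylov-basis construction mentioned above can be sketched on a toy two-state Markov reward process (the chain, reward, and discount below are choices made here for illustration): the basis {r, Pr, P²r, ...} built from the reward vector and transition matrix spans, at full dimension, a space containing the value function exactly.

```python
# Two-state Markov reward process (illustrative numbers).
P = [[0.9, 0.1], [0.2, 0.8]]  # transition matrix
r = [1.0, 0.0]                # reward vector
g = 0.9                       # discount factor


def matvec(M, v):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]


# "Exact" value function via value iteration: V = r + g * P V.
V = [0.0, 0.0]
for _ in range(2000):
    V = [ri + g * pv for ri, pv in zip(r, matvec(P, V))]

# Krylov basis {r, Pr}: for an n-state chain, n such vectors
# always suffice to represent V exactly.
b1, b2 = r, matvec(P, r)

# Solve c1*b1 + c2*b2 = V by Cramer's rule on the 2x2 system.
det = b1[0] * b2[1] - b2[0] * b1[1]
c1 = (V[0] * b2[1] - b2[0] * V[1]) / det
c2 = (b1[0] * V[1] - V[0] * b1[1]) / det
recon = [c1 * x + c2 * y for x, y in zip(b1, b2)]
print(f"V = {V}")
print(f"Krylov reconstruction = {recon}")
```

With only the first few Krylov vectors one instead gets a projection of V, which is the setting the refinement analysis addresses.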


Local Bandit Approximation for Optimal Learning Problems

In general, procedures for determining Bayes-optimal adaptive controls for Markov decision processes (MDPs) require a prohibitive amount of computation: the optimal learning problem is intractable. This paper proposes an approximate approach in which bandit processes are used to model, in a certain "local" sense, a given MDP. Bandit processes constitute an important subclass of MDPs, and have ...
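As a generic illustration of a bandit process with Bayesian belief updates (not the index-based scheme of the paper; Thompson sampling is used here as the decision rule, and the arm probabilities are invented for the example):

```python
import random

random.seed(3)

# Two-armed Bernoulli bandit with unknown success probabilities.
TRUE_P = [0.3, 0.6]

# Beta(1, 1) priors per arm, stored as [alpha, beta] pseudo-counts.
post = [[1, 1], [1, 1]]


def thompson_pick():
    """Sample a success probability from each arm's Beta posterior
    and play the arm with the larger draw."""
    draws = [random.betavariate(a, b) for a, b in post]
    return max(range(2), key=lambda i: draws[i])


pulls = [0, 0]
for _ in range(2000):
    arm = thompson_pick()
    reward = random.random() < TRUE_P[arm]
    post[arm][0 if reward else 1] += 1  # Bayesian belief update
    pulls[arm] += 1

print(f"pulls per arm: {pulls}")
```

The state of each arm is just its posterior, which is what makes bandit processes a tractable "local" model of learning compared with a full Bayes-optimal MDP.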


Controlled MCMC for Optimal Sampling

In this paper we develop an original and general framework for automatically optimizing the statistical properties of Markov chain Monte Carlo (MCMC) samples, which are typically used to evaluate complex integrals. The Metropolis-Hastings algorithm is the basic building block of classical MCMC methods and requires the choice of a proposal distribution, which usually belongs to a parametric fami...
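One standard instance of tuning such a proposal automatically (a common adaptive-scaling sketch under assumptions made here: Robbins-Monro adaptation of the proposal scale toward a target acceptance rate, not this paper's specific controlled-MCMC scheme):

```python
import math
import random

random.seed(7)


def adaptive_rwm(logpi, x0, steps, target_acc=0.44):
    """Random-walk Metropolis with Robbins-Monro adaptation of the
    log proposal scale toward a target acceptance rate."""
    x, log_s = x0, math.log(5.0)  # deliberately bad initial scale
    chain = []
    for t in range(1, steps + 1):
        y = x + math.exp(log_s) * random.gauss(0.0, 1.0)
        acc = math.exp(min(0.0, logpi(y) - logpi(x)))  # acceptance prob
        if random.random() < acc:
            x = y
        # Diminishing adaptation: step sizes t^(-0.7) shrink, so the
        # chain's transition kernel stabilizes over time.
        log_s += t ** -0.7 * (acc - target_acc)
        chain.append(x)
    return chain, math.exp(log_s)


# Standard normal target; the scale should settle near its tuned value.
chain, scale = adaptive_rwm(lambda x: -0.5 * x * x, 0.0, 20000)
print(f"adapted proposal scale ~ {scale:.2f}")
```

Adapting on the log scale keeps the proposal width positive, and the shrinking adaptation steps are the usual device for preserving ergodicity while tuning.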



Journal

Journal title: Statistics and Computing

Year: 2022

ISSN: 0960-3174, 1573-1375

DOI: https://doi.org/10.1007/s11222-022-10123-0